Is Deeper Better only when Shallow is Good?
Understanding the power of depth in feed-forward neural networks is an ongoing challenge in deep learning theory. While recent works establish the importance of depth for the expressive power of neural networks, it remains an open question whether these benefits are actually exploited during a gradient-based optimization process. In this work we explore the relation between the expressivity properties of deep networks and the ability to train them efficiently using gradient-based algorithms. We give a depth separation argument for distributions with a fractal structure, showing that they can be expressed efficiently by deep networks but not by shallow ones. These distributions have a natural coarse-to-fine structure, and we show that the balance between the coarse and fine details has a crucial effect on whether the optimization process is likely to succeed. We prove that when the distribution is concentrated on the fine details, gradient-based algorithms are likely to fail. Using this result we prove that, at least for some distributions, the success of learning deep networks depends on whether the distribution can be approximated by shallower networks, and we conjecture that this property holds in general.
Reviews: Is Deeper Better only when Shallow is Good?
This is a good paper that suggests excellent directions for new work. The key point is captured in this statement: "we conjecture that a distribution which cannot be approximated by a shallow network cannot be learned using a gradient-based algorithm, even when using a deep architecture." The authors provide first steps towards investigating this claim. There has been a small amount of work on the typical expressivity of neural networks, in addition to the "worst-case approach." See the papers "Complexity of linear regions in deep networks" and "Deep ReLU Networks Have Surprisingly Few Activation Patterns" by Hanin and Rolnick, which prove that while the number of linear regions can be made to grow exponentially with the depth, the typical number of linear regions is much smaller. See also "Do deep nets really need to be deep?" by Ba and Caruana, which indicates that once deep networks have learned a function, shallow networks can often be trained to distill the deep networks without appreciable performance loss.
Reviews: Is Deeper Better only when Shallow is Good?
This paper investigates the effect of depth on expressivity and learnability, given a distribution generated by an iterated function system. In particular, the authors show that shallow networks need an exponential number of neurons to realize a fractal distribution, while deep networks only require a number of neurons that is linear in the depth of the fractal distribution. The results are interesting and could shed some light on the theoretical understanding of deep learning. The reviewers have therefore expressed their support for this paper, even though it studies a mathematically narrow case whose practical value is not yet clear. The impact of the work would be greatly improved if the authors could extend their study to more general cases.
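To make the notion of a distribution generated by an iterated function system concrete, the following is a minimal sketch (not the paper's exact construction) using the classic middle-thirds Cantor IFS: each contraction maps [0, 1] into one of two sub-intervals, so the first applications fix the coarse location of a sample and the last applications fix its fine detail. The names `MAPS`, `sample_ifs`, and `sample_labeled` are illustrative, not taken from the paper.

```python
import random

# Two contractions of the middle-thirds Cantor IFS: each maps [0, 1]
# into one of the two outer thirds. Iterating d times yields a point
# from the depth-d approximation of the Cantor attractor.
MAPS = [
    lambda x: x / 3.0,             # left third  [0, 1/3]
    lambda x: x / 3.0 + 2.0 / 3.0,  # right third [2/3, 1]
]

def sample_ifs(depth, rng=random):
    """Draw one point from the depth-`depth` approximation of the attractor.

    The *last* map applied determines the coarse (top-level) branch the
    point lands in; earlier maps only refine the fine-scale position.
    """
    x = rng.random()  # start anywhere in [0, 1]
    for _ in range(depth):
        x = rng.choice(MAPS)(x)
    return x

def sample_labeled(depth, rng=random):
    """Pair a sample with its coarse bit: 0 for the left third, 1 for the right."""
    x = sample_ifs(depth, rng)
    return x, int(x >= 0.5)
```

After even one iteration, every sample avoids the middle third (1/3, 2/3); deeper iteration removes middle thirds at ever finer scales, which is the coarse-to-fine structure the reviews refer to.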
Is Deeper Better only when Shallow is Good?
Malach, Eran, Shalev-Shwartz, Shai